35 research outputs found

    Multisymplectic Lie group variational integrator for a geometrically exact beam in R3

    In this paper we develop, study, and test a Lie group multisymplectic integrator for geometrically exact beams based on the covariant Lagrangian formulation. We exploit the multisymplectic character of the integrator to analyze the conservation of energy and momentum maps associated with the temporal and spatial discrete evolutions.

    Comment: Article in press. 22 pages, 18 figures. Received 20 November 2013, received in revised form 26 February 2014, accepted 27 February 2014. Communications in Nonlinear Science and Numerical Simulation, 2014.
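    For readers unfamiliar with variational integrators, the sketch below shows the basic construction that work of this kind generalizes to Lie groups and to space-time (multisymplectic) discretizations: advance the trajectory by solving the discrete Euler-Lagrange equations of a discrete Lagrangian. It is a minimal one-degree-of-freedom pendulum illustration, not the paper's beam scheme; the Lagrangian, step size, and root-bracketing interval are assumptions.

```python
# Minimal sketch: a variational integrator for a planar pendulum, illustrating the
# discrete Euler-Lagrange idea that the paper extends to Lie groups and field theories.
# Not the paper's beam discretization; all parameter values are assumed.
import numpy as np
from scipy.optimize import brentq

g, l, h = 9.81, 1.0, 0.01  # gravity, pendulum length, time step (assumed)

def Ld(q0, q1):
    """Midpoint discrete Lagrangian L_d(q0, q1), approximating the action over one step."""
    qm, vm = 0.5 * (q0 + q1), (q1 - q0) / h
    return h * (0.5 * l**2 * vm**2 + g * l * np.cos(qm))

def dLd_dq0(q0, q1, eps=1e-7):
    """Numerical derivative of L_d with respect to its first argument."""
    return (Ld(q0 + eps, q1) - Ld(q0 - eps, q1)) / (2 * eps)

def dLd_dq1(q0, q1, eps=1e-7):
    """Numerical derivative of L_d with respect to its second argument."""
    return (Ld(q0, q1 + eps) - Ld(q0, q1 - eps)) / (2 * eps)

def step(q_prev, q_curr):
    """Solve D2 L_d(q_prev, q_curr) + D1 L_d(q_curr, q_next) = 0 for q_next."""
    resid = lambda q_next: dLd_dq1(q_prev, q_curr) + dLd_dq0(q_curr, q_next)
    return brentq(resid, q_curr - 1.0, q_curr + 1.0)  # root is well inside this bracket

q = [0.5, 0.5]                      # equal first two angles: start at rest
for _ in range(1000):
    q.append(step(q[-2], q[-1]))
```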

    A Discrete Geometric Optimal Control Framework for Systems with Symmetries

    This paper studies the optimal motion control of mechanical systems through a discrete geometric approach. At the core of our formulation is a discrete Lagrange-d'Alembert-Pontryagin variational principle, from which discrete equations of motion are derived that serve as constraints in our optimization framework. We apply this discrete mechanical approach to holonomic systems with symmetries and, as a result, geometric structure and motion invariants are preserved. We illustrate our method by computing optimal trajectories for a simple model of an air vehicle flying through a digital terrain elevation map, and point out some of the numerical benefits that ensue.
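    For orientation, the forced (controlled) discrete Euler-Lagrange equations produced by a discrete Lagrange-d'Alembert-type principle read, in standard discrete-mechanics notation (this is the textbook statement, not an excerpt from the paper, whose Lagrange-d'Alembert-Pontryagin form also carries discrete momenta):

```latex
\delta \sum_{k=0}^{N-1} L_d(q_k, q_{k+1})
  + \sum_{k=0}^{N-1} \left[ f_k^- \cdot \delta q_k + f_k^+ \cdot \delta q_{k+1} \right] = 0
\;\Longrightarrow\;
D_2 L_d(q_{k-1}, q_k) + D_1 L_d(q_k, q_{k+1}) + f_{k-1}^+ + f_k^- = 0,
\qquad k = 1, \dots, N-1 .
```

    In an optimization framework of this kind, equations of this form are imposed as equality constraints that link the discrete trajectory to the discrete forces or controls.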

    PAC-NMPC with Learned Perception-Informed Value Function

    Nonlinear model predictive control (NMPC) is typically restricted to short, finite horizons to limit the computational burden of online optimization. This makes a global planner necessary to avoid local minima when using NMPC for navigation in complex environments. For this reason, the performance of NMPC approaches is often limited by that of the global planner. While control policies trained with reinforcement learning (RL) can theoretically learn to avoid such local minima, they are usually unable to guarantee enforcement of general state constraints. In this paper, we augment a sampling-based stochastic NMPC (SNMPC) approach with an RL-trained perception-informed value function. This allows the system to avoid observable local minima in the environment by reasoning about perception information beyond the finite planning horizon. By using Probably Approximately Correct NMPC (PAC-NMPC) as our base controller, we are also able to generate statistical guarantees of performance and safety. We demonstrate our approach in simulation and on hardware using a 1/10th scale rally car with lidar.

    Comment: This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
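    As a rough sketch of the central idea only, a sampling-based predictive controller whose rollout cost is augmented with a learned value estimate at the end of the finite horizon, the toy loop below adds a terminal value term to each sampled rollout. The dynamics, stage cost, value_fn, and weighting rule are placeholder assumptions; this is not the PAC-NMPC algorithm and carries none of its statistical guarantees.

```python
# Sketch: sampling-based MPC where each rollout's cost is augmented with a learned
# value estimate at the horizon's end (placeholder dynamics, cost, and value function).
import numpy as np

H, K, sigma = 20, 256, 0.5      # horizon length, number of sampled control sequences, noise std

def dynamics(x, u):             # assumed toy unicycle-like model, 0.1 s step
    px, py, th = x
    return np.array([px + 0.1 * np.cos(th), py + 0.1 * np.sin(th), th + 0.1 * u])

def stage_cost(x, u, goal):
    return np.sum((x[:2] - goal) ** 2) + 0.01 * u ** 2

def value_fn(x, goal):          # stand-in for an RL-trained, perception-informed value
    return 10.0 * np.linalg.norm(x[:2] - goal)

def plan(x0, u_nom, goal, rng):
    noise = rng.normal(0.0, sigma, size=(K, H))
    costs = np.zeros(K)
    for k in range(K):
        x, u_seq = x0.copy(), u_nom + noise[k]
        for t in range(H):
            x = dynamics(x, u_seq[t])
            costs[k] += stage_cost(x, u_seq[t], goal)
        costs[k] += value_fn(x, goal)          # terminal value: cost-to-go beyond the horizon
    w = np.exp(-(costs - costs.min()))         # softmax-style weighting of the rollouts
    w /= w.sum()
    return u_nom + w @ noise                   # weighted update of the nominal control sequence

u = plan(np.zeros(3), np.zeros(H), goal=np.array([2.0, 1.0]), rng=np.random.default_rng(0))
```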

    Laboratory Validation of Vision Based Grasping, Guidance and Control with Two Nanosatellite Models

    The goal of this work is to demonstrate the autonomous proximity operation capabilities of a 3U scale cubesat in performing the simulated tasks of docking, charging, relative navigation, and deorbiting of space debris, as a step towards designing a fully robotic cubesat. The experiments were performed on an air-bearing testbed, using an engineering model of a 3U scale cubesat equipped with cold-gas propulsion. An appendage with a gripper is integrated into the model to enable grasping. Onboard vision and control algorithms are employed to perform precise navigation and manipulation tasks. Three experiments incorporating the tasks above have been successfully demonstrated.

    Hardware: The experimental setup consists of two 3U cubesat engineering models, an air-bearing testbed, and a motion capture system. The current cubesat model is derived from a previous version that has been used to demonstrate autonomous point-to-point navigation and obstacle avoidance tasks. The cubesat model consists of the following main subsystems: 3D-printed cold-gas propulsion, sensing and computing, and power. In addition, we developed and integrated an appendage with a multipurpose end effector that is effective in grasping objects, docking to, and charging a second cubesat model. The sensor suite consists of pressure sensors, an inertial measurement unit (IMU), short-range IR sensors, and a camera. An Odroid XU4 computer with an octa-core processor was chosen to satisfy the computational, power, and form constraints of the model.

    Software: The perception and control algorithms used for the proximity operations were developed and implemented using an open-source robotics software framework, the Robot Operating System (ROS), as middleware for communication. The perception algorithm estimates the 3D pose and rate of change of the cubesat and objects of interest in its vicinity. The object detection requires a textured 3D model of the objects and works by matching SURF features of a given image to those generated from the 3D model. The object tracking employs KLT tracking with outlier detection to obtain robust estimates. The textured 3D model is constructed from multi-view images; however, it can also be generated from CAD models. A state machine is employed to automatically switch between the desired control behaviors.

    Experiment: The system's performance is validated through three experiments showcasing precise relative navigation, docking, and reconfiguration. The first experiment is a simple docking and reconfiguration maneuver, in which a primary cubesat detects and navigates to the closest face of a passive secondary cubesat, upon which it deploys its appendage and docks. The primary then navigates the joined system to a final goal position. In a variation of this experiment, after docking, the primary transmits power to the secondary, which is indicated by an LED. The next experiment explores the scenario of debris deorbiting. Similar to the first experiment, the docking procedure is performed, followed by unlatching and release of the secondary with a desired velocity vector. In the last experiment, the primary and secondary execute relative navigation along a set path while maintaining formation. Additional details can be found here: https://asco.lcsr.jhu.edu/nanosatellite-guidance-navigation-and-contro
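    The abstract mentions a state machine that switches between control behaviors; a minimal sketch of that pattern is given below. The states, triggers, and thresholds are illustrative assumptions, not the flight software described above.

```python
# Sketch: a minimal behavior state machine for switching control modes during a
# docking-and-transport scenario (states and thresholds are assumed, not the authors').
from enum import Enum, auto

class Mode(Enum):
    SEARCH = auto()
    APPROACH = auto()
    DOCK = auto()
    TRANSPORT = auto()
    DONE = auto()

def next_mode(mode, target_visible, range_m, docked, at_goal):
    """Return the next control mode given the current mode and sensed conditions."""
    if mode is Mode.SEARCH and target_visible:
        return Mode.APPROACH
    if mode is Mode.APPROACH and range_m < 0.15:     # assumed docking range [m]
        return Mode.DOCK
    if mode is Mode.DOCK and docked:
        return Mode.TRANSPORT
    if mode is Mode.TRANSPORT and at_goal:
        return Mode.DONE
    return mode

mode = Mode.SEARCH
mode = next_mode(mode, target_visible=True, range_m=1.2, docked=False, at_goal=False)
assert mode is Mode.APPROACH
```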

    Discrete Variational Optimal Control

    This paper develops numerical methods for optimal control of mechanical systems in the Lagrangian setting. It extends the theory of discrete mechanics to enable the solution of optimal control problems through the discretization of variational principles. The key point is to solve the optimal control problem as a variational integrator of a specially constructed higher-dimensional system. The developed framework applies to systems on tangent bundles, Lie groups, and underactuated and nonholonomic systems with symmetries, and can approximate either smooth or discontinuous control inputs. The resulting methods inherit the preservation properties of variational integrators and result in numerically robust and easily implementable algorithms. Several theoretical examples and a practical one, the control of an underwater vehicle, illustrate the application of the proposed approach.

    Comment: 30 pages, 6 figures.
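    A minimal sketch of the kind of transcription this suggests: minimize a discrete control cost subject to forced discrete Euler-Lagrange equations imposed as equality constraints. The pendulum model, rectangle-rule discretization, and solver settings below are assumptions for illustration, not the paper's Lie group or underactuated examples.

```python
# Sketch: discrete variational optimal control of a pendulum swing-up as a nonlinear
# program; the equality constraints are rectangle-rule discrete Euler-Lagrange
# equations with a control force (toy stand-in, all values assumed).
import numpy as np
from scipy.optimize import minimize

N, h, g, l = 20, 0.1, 9.81, 1.0           # steps, step size, gravity, length
q0, qN = 0.0, np.pi                       # swing from hanging to upright

def unpack(z):
    return z[:N + 1], z[N + 1:]           # angles q_0..q_N, controls at interior nodes

def objective(z):
    _, u = unpack(z)
    return h * np.sum(u ** 2)             # discrete control effort

def residuals(z):
    q, u = unpack(z)
    res = [q[0] - q0, q[-1] - qN]         # boundary conditions
    for k in range(1, N):
        # forced discrete Euler-Lagrange equation at interior node k
        accel = (q[k + 1] - 2 * q[k] + q[k - 1]) / h ** 2
        res.append(l ** 2 * accel + g * l * np.sin(q[k]) - u[k - 1])
    return np.array(res)

z0 = np.concatenate([np.linspace(q0, qN, N + 1), np.zeros(N - 1)])
sol = minimize(objective, z0, constraints={"type": "eq", "fun": residuals},
               method="SLSQP", options={"maxiter": 500})
q_opt, u_opt = unpack(sol.x)
```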

    Deep Learning Guided Autonomous Surgery: Guiding Small Needles into Sub-Millimeter Scale Blood Vessels

    We propose a general strategy for autonomous guidance and insertion of a needle into a retinal blood vessel. The main challenges underpinning this task are the accurate placement of the needle tip on the target vein and a careful needle insertion maneuver to avoid double-puncturing the vein, while dealing with challenging kinematic constraints and depth-estimation uncertainty. Following how surgeons perform this task purely based on visual feedback, we develop a system which relies solely on monocular visual cues by combining data-driven kinematic and contact estimation, visual servoing, and model-based optimal control. By relying on both known kinematic models and deep-learning-based perception modules, the system can localize the surgical needle tip and detect needle-tissue interactions and venipuncture events. The outputs from these perception modules are then combined with a motion planning framework that uses visual servoing and optimal control to cannulate the target vein, while respecting kinematic constraints that consider the safety of the procedure. We demonstrate that we can reliably and consistently perform needle insertion in the domain of retinal surgery, specifically in performing retinal vein cannulation. Using cadaveric pig eyes, we demonstrate that our system can navigate to target veins within 22 µm XY accuracy and perform the entire procedure in less than 35 seconds on average, and all 24 trials performed on 4 pig eyes were successful. A preliminary comparison study against a human operator shows that our system is consistently more accurate and safer, especially during safety-critical needle-tissue interactions. To the best of the authors' knowledge, this work accomplishes a first demonstration of autonomous retinal vein cannulation in a clinically relevant setting using animal tissues.
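    As a hedged illustration of the visual-servoing ingredient only (not the paper's learned-perception or optimal-control pipeline), the snippet below computes a classic image-based visual-servoing camera twist v = -lambda * L^+ * e for a single point feature; the feature coordinates, depth Z, and gain are assumed values.

```python
# Sketch: one step of textbook image-based visual servoing (IBVS) for a point feature.
# The interaction matrix is the standard one for a normalized image point at depth Z;
# the numbers below are placeholders, not parameters from the paper.
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction matrix of a normalized image point (x, y) observed at depth Z."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x ** 2), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y ** 2, -x * y, -x],
    ])

def ibvs_velocity(feat, feat_des, Z, lam=0.5):
    """Camera twist (v, omega) that drives the image-feature error to zero exponentially."""
    L = interaction_matrix(*feat, Z)
    e = np.asarray(feat) - np.asarray(feat_des)
    return -lam * np.linalg.pinv(L) @ e

twist = ibvs_velocity(feat=(0.05, -0.02), feat_des=(0.0, 0.0), Z=0.01)
```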